
Introduction

Wednesday 29 – Friday 31 May 2024


The aim of the three-day dialogue, ‘Addressing the rise of hatred of religion or belief’, was to understand the political, social and ideological factors that contribute to the targeting of members of religious and other marginalised communities and the intersections between different forms of identity-based hate, as well as to identify practical strategies for addressing these issues. Civil society representatives, government officials, journalists, activists and academics came together to share current best practices, successes and challenges in combating hate across different governance, legal and cultural contexts, with particular attention to grassroots-level engagement.

A key part of this involved unpacking how the online environment and emerging technologies contribute to the targeting of members of religious communities. For example, participants discussed transnational repression, doxxing, technology-facilitated gender-based violence, and the spread of hate speech and disinformation, as well as how the relationship between the online and offline environments should be understood.

Key themes

Normalisation of hate and processes of ‘Othering’

The normalisation of hate

Hatred on the basis of religion or belief does not arise in a vacuum. There is ample evidence that a lack of societal inclusion and community cohesion, whether driven by governments or by communities themselves, provides fertile ground for hate. Such hatred is increasingly normalised in online and offline spaces, including as a vehicle for expressing a wide range of underlying grievances, some of which have spurred violence.

Processes of ‘Othering’ 

Hatred on the basis of religion or belief is often part of wider processes of ‘Othering’ which invoke historical tensions and divisions. It often references stereotypes and images which are subsequently amalgamated with ‘notions of racial and national unity. Groups seen as “the other” can be accused of espionage for foreign powers, moral bankruptcy, infiltration in order to destroy the dominant community, non-allegiance to the nation-State and deviance or non-conformity with the hegemonic set of societal values’ [4]. What manifests itself as hate speech can therefore be difficult to disentangle from other forms of hatred, as grievances overlap.

Othering often happens when groups feel their identity is threatened. There is a need to construct a shared human identity that makes othering unnatural.


Connections between hate speech, politics and elections

Discrimination and hate speech on the basis of religion or belief break down communities. Spikes in hate speech and discrimination on the basis of religion or belief can sometimes coincide with elections, pointing to how political actors exploit and encourage pre-existing prejudicial attitudes for their own gain, often in alliance with far-right or populist parties.

Hate speech on the basis of religion or belief often exacerbates, and is exacerbated by, existing political and societal conflict and grief, as well as by acute escalations such as the ongoing Israel-Hamas war. For example, there was a 400% increase in anti-Muslim and anti-Jewish hate speech online in the United States after 7 October compared with the previous six-month baseline.

Diverse contexts, different languages, and intersectionality

‘We need humility, self-reflexivity and to understand our own positionality when we work on these issues’

Context is king

Given the complexity and intersectionality of how hatred on the basis of religion or belief manifests itself, context is king. To understand how hate speech operates, and then to challenge it, it is helpful to involve and consult a wide range of actors: local influencers online and offline, community leaders, religious actors, celebrities, politicians, civil society, businesses and governments.

The importance of language skills and understanding the dynamism of jargon

When seeking to combat hate, language skills and an understanding of the nuances of local jargon are crucial. Because such language is contextual and dynamic, defining what constitutes discriminatory speech can be difficult, which makes global and even regional regulations or guidelines challenging. Content moderation is therefore not just about removing harmful content, but also about training moderators and consumers to understand the development of contextually contingent language.

Deploying an intersectional lens

Fundamentally, any approach and response need to be intersectional in their framing. This means acknowledging that the form hatred and violence take depends on who it is directed at, and is not disconnected from other structures of oppression. Drawing on a term originally coined by legal scholar Kimberlé Crenshaw, an intersectional approach acknowledges the interconnected nature of social categorisations such as race, gender identity, sexual orientation, religion, ability and social class, which overlap to create interdependent systems of discrimination or disadvantage. These realities are often context-specific and might mean that certain people (e.g. impoverished, rural women adhering to a minority religion or belief) are at increased risk of suffering hatred and violence on the basis of religion or belief, which might also express itself differently than for others (e.g. denial of access to water sources, healthcare or education).

Understanding online and offline environments

‘How can we get to a place where we amplify the voices for good? The marketplace is designed to amplify the voices of hate’

How are online and offline connected?

Online and offline hatred on the basis of religion or belief is initiated by a variety of actors, sometimes in order to gain influence or benefit: anonymous online users, bots, violent extremists, members of political parties, government officials, civil society and religious actors. The online environment and emerging technologies such as AI and deepfakes, along with online platforms including Reddit, Discord, Twitch and traditional social media such as Facebook, Twitter/X and Instagram, are sometimes used to dox, harass and propagate hate that manifests in offline impacts.

There is an urgent need to better understand how the online and offline worlds interact: they should not be thought of as separate environments, but as ones that feed off one another, often in negative ways. There are examples of the online environment generating hatred, and of it amplifying and intensifying hatred and direct targeting. Social media provides anonymity that enables users to vent frustrations and say hateful things without facing immediate consequences. This lack of accountability generates a permissive hate culture that puts people at risk both online and offline. In this sense, social media is not a reflection of how we are; it often reflects the worst part of us.

Global vs. local dimensions

Social media is often praised for its ability to connect people all over the world. However, this can also mean that people engage less where they live, instead seeking out communities of like-minded people online. Close-knit virtual communities often form echo chambers that amplify and exaggerate ideas, with little opposing information or few alternative viewpoints to balance messages. They can also limit opportunities to form bridging connections with individuals from different backgrounds or perspectives. This hampers efforts to build positive and religiously inclusive communities rooted in the local area. Real interpersonal connections and relationships built locally and across identities have been shown to build resilience and help foster inclusive communities.

Positive counter-messages

While there are growing examples of hateful messages online feeding into offline spaces, there are also examples of counter-messaging and of religious communities using online spaces constructively. For example, Pope Francis has an active online presence that seeks to combat hate online through positive messages of unity and care across religious faith groups. The challenge is that these messages are often not amplified, as algorithms tend to privilege messages that cause outrage, favouring hate over hope. Individuals are more likely to share hateful messages than positive ones, which makes counter-messaging challenging.

Not everyone is online

It is important to bear in mind that approximately 37% of the world’s population still do not have access to the internet [5], although many do have mobile phones. In some contexts, it is as important to focus on combatting hatred spread through other media, such as local radio stations and more traditional outlets, as through the internet and social media. It is worth recalling that 30 years ago the use of radio was a key part of spreading hate messaging during the Rwandan Genocide, and that mass-produced CDs, DVDs and written media contributed to anti-Muslim violence in Myanmar in 2013.

Opportunities and limitations of legal frameworks and the rule of law

FoRB and Human Rights

The Universal Declaration of Human Rights prohibits the abuse of rights, and freedom of religion or belief (FoRB) is enshrined as a core human right. This can be a useful tool, but it is more challenging at a time when the human rights system is being undermined by a number of actors, including some that were historically its proponents.

Prohibition and obligations

Human rights systems protect against hatred on the basis of religion or belief in two ways:

i) ‘negative’ obligations, such as legislating against human rights abuses;

ii) ‘positive’ obligations on states to ensure that rights are protected.

The focus on positive obligations can provide a more effective legal framework and affords opportunities for restricting hate speech. In Georgian Muslim Relations and Others v Georgia, the applicants wanted to open a Muslim boarding school and received several threats from the local population, from which the police failed to protect them. The European Court of Human Rights found that Georgia had violated the applicants’ right to freedom of religion or belief by not doing enough to prevent the threats against them and by not allowing them to exercise their religion.

Blasphemy laws

At the same time, there are challenges associated with appealing to the law. Criminalisation is not the answer to hate and cannot be relied upon to build inclusive communities. Depending on existing legal frameworks and national contexts, the law can become instrumentalised to privilege religious majority communities or to settle personal scores, for example through blasphemy laws. The Pew Research Center found that in 2019, 40% of the world’s countries had some form of blasphemy law, penalising speech or actions considered contemptuous of God or of people or objects considered sacred. These laws undermine a host of other rights, including non-discrimination, minority rights and the right to liberty and security of person, with religious minorities and those with no faith particularly vulnerable. Furthermore, their use is often entangled in wider authoritarian political aims.

Social media and technology companies

Engaging with technology companies

In order to address this issue substantively, technology companies need to be engaged effectively. While there are some examples of tech companies engaging on hate speech and FoRB-related abuse online, there is little consistency and regulation is fragmented. Social media companies benefit from high activity levels, which are more easily generated by negative, hateful posts than by positive messaging. They could also devote more resources to media monitoring in local languages.

Meta is one technology company that has engaged to a greater extent than others and, having done so, has seen a decline in hate speech on its platforms. While this might be seen as a positive step, the worry is that hateful content is simply moving to other platforms, in what is known as ‘platform creep’.

The challenges of regulating hate speech and mapping developments 

How to respond to hate speech online in terms of regulation is another challenge. While incitement to violence requires criminalisation and legal responses, it is far harder to respond to subtler microaggressions: less direct but often very harmful words and language that do not quite meet the legal threshold. For example, a mapping of religious hate speech in Lebanon found that, while a small portion used veiled or explicit threats of violence or encouraged others to use violence, the majority of content classed as hate speech denigrated and dehumanised individuals or groups based on their religious identity. Additional research is needed to understand what enablers ‘push’ hateful speech above the threshold at which it becomes clear incitement to violence.

Centring children and youth in FoRB work

‘Are we confident that we can vaccinate children against hate in their future life?’

Recognising the role that children and young people already play

Young people play a decisive role in combatting hate speech and discrimination on the basis of religion or belief, and all societal actors and stakeholders should recognise them as key protagonists of positive change. When working with young people to combat hate speech, it is important to cast the net widely and ensure that a diverse group of young people is reached. Trust needs to be placed in young people and in their ability to lead in this area. This often means seeking out alternative forms of engagement that go beyond conferences and traditional teaching spaces.

Supporting and protecting young FoRB advocates

While adequate risk management is important for all FoRB advocates, it is especially important for young people, who face additional vulnerabilities. Young people who are active in building mutual respect and combatting FoRB-related hate speech are often at heightened risk of attack. Those who speak out in defence of other people’s right to FoRB are often themselves at risk and need sustained support from the wider national and international community.


[4] Nazila Ghanea, UNGA report, p. 3.

[5] United Nations, The UN Intranet-iSeek for Member States, ‘ITU: 2.9 billion people still offline’, https://www.un.org/en/delegate/itu-29-billion-people-still-offline [accessed 24 June 2024].

